Event cameras respond to brightness changes in a scene asynchronously and independently for each pixel. Owing to these properties, such cameras have distinctive features: high dynamic range (HDR), high temporal resolution, and low power consumption. However, the output of an event camera must be processed into an alternative representation for computer vision tasks. In addition, events are usually noisy and lead to poor performance in regions with few events. In recent years, many researchers have attempted to reconstruct video from events, but these methods do not produce high-quality video because of the lack of temporal information in the irregular and discontinuous event data. To overcome these difficulties, we introduce E2V-SDE, whose dynamics are governed in a latent space by stochastic differential equations (SDEs). As a result, E2V-SDE can rapidly reconstruct images at arbitrary time steps and make realistic predictions on unseen data. We further adopt a variety of image composition techniques to improve image clarity and temporal consistency. Through extensive experiments on simulated and real-scene datasets, we verify that our model outperforms state-of-the-art methods under various video reconstruction settings. In terms of image quality, the LPIPS score improves by up to 12%, and the reconstruction speed is 87% higher than that of ET-Net.
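The paper's learned latent dynamics are not reproduced here, but the core idea of querying a latent state at an arbitrary time step can be sketched by integrating a toy SDE dz = f(z)dt + g(z)dW with the Euler-Maruyama scheme; the drift f and diffusion g below are hypothetical stand-ins for the learned networks.

```python
import numpy as np

def euler_maruyama(f, g, z0, t0, t1, n_steps, rng):
    """Integrate dz = f(z) dt + g(z) dW from t0 to t1 with Euler-Maruyama steps."""
    dt = (t1 - t0) / n_steps
    z = np.asarray(z0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=z.shape)  # Brownian increment
        z = z + f(z) * dt + g(z) * dW
    return z

# Hypothetical drift/diffusion standing in for the learned networks.
f = lambda z: -z                      # drift pulls the latent toward the origin
g = lambda z: 0.1 * np.ones_like(z)   # small constant diffusion

rng = np.random.default_rng(0)
z1 = euler_maruyama(f, g, z0=np.ones(4), t0=0.0, t1=1.0, n_steps=100, rng=rng)
print(z1)  # latent state queried at t = 1.0
```

Because the integrator accepts any target time t1, the same machinery supports reconstruction at arbitrary, irregular time steps, which is what makes the latent-SDE formulation attractive for asynchronous event data.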
translated by Google Translate
Spiking neural networks (SNNs), which mimic the propagation of information in the brain, can process spatio-temporal information energy-efficiently through discrete and sparse spikes, and have therefore received considerable attention. To improve the accuracy and energy efficiency of SNNs, most previous studies have focused only on training methods, and the effect of architecture has rarely been investigated. We examine the design choices used in prior work in terms of accuracy and number of spikes, and find that they are not best suited for SNNs. To further improve accuracy and reduce the spikes generated by SNNs, we propose a spike-aware neural architecture search framework called AutoSNN. We define a search space consisting of architectures free of these undesirable design choices. To enable spike-aware architecture search, we introduce a fitness function that considers both accuracy and the number of spikes. AutoSNN successfully discovers SNN architectures that outperform hand-crafted SNNs in both accuracy and energy efficiency. We thoroughly demonstrate the effectiveness of AutoSNN on various datasets, including neuromorphic datasets.
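The abstract describes a fitness that trades accuracy against spike count. The exact trade-off used by AutoSNN is not reproduced here; the sketch below assumes a simple linear penalty with a hypothetical coefficient `lam` and a spike budget, just to show how such a fitness would rank candidate architectures.

```python
def spike_aware_fitness(accuracy, n_spikes, spike_budget, lam=1.0):
    """Score an SNN candidate: reward accuracy, penalize spikes relative to a budget.
    `lam` is a hypothetical knob, not the paper's exact formulation."""
    return accuracy - lam * (n_spikes / spike_budget)

candidates = [
    {"arch": "A", "accuracy": 0.92, "n_spikes": 120_000},
    {"arch": "B", "accuracy": 0.91, "n_spikes": 60_000},
]
budget = 100_000
best = max(candidates,
           key=lambda c: spike_aware_fitness(c["accuracy"], c["n_spikes"], budget))
print(best["arch"])  # the lower-spike architecture wins despite slightly lower accuracy
```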
Spiking neural networks (SNNs) have emerged as energy-efficient alternatives to conventional artificial neural networks (ANNs) thanks to their event-driven computation. In view of the future deployment of SNN models on constrained neuromorphic devices, many studies have applied techniques originally developed for ANN model compression, such as network quantization, pruning, and knowledge distillation, to SNNs. Among them, existing work on knowledge distillation reports accuracy improvements for the student SNN model, but omits analysis of energy efficiency, which is also an important characteristic of SNNs. In this paper, we thoroughly analyze the performance of distilled SNN models in terms of both accuracy and energy efficiency. In the process, we observe that the number of spikes increases substantially when conventional knowledge distillation methods are used, leading to energy inefficiency. Based on this analysis, and to achieve energy efficiency, we propose a novel knowledge distillation method with heterogeneous temperature parameters. We evaluate our method on two different datasets and show that the resulting student SNN achieves both improved accuracy and a reduced number of spikes. On the MNIST dataset, our proposed student SNN attains up to 0.09% higher accuracy and produces 65% fewer spikes than a student SNN trained with the conventional knowledge distillation method. We also compare our results with other SNN compression techniques and training methods.
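A minimal sketch of temperature-scaled distillation helps make "heterogeneous temperature parameters" concrete: the teacher and student distributions are softened with *different* temperatures before the KL divergence is computed. Using separate temperatures per network is the assumption here; the paper's exact loss may differ.

```python
import numpy as np

def softmax(logits, T):
    z = logits / T
    z = z - z.max()               # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T_student, T_teacher):
    """KL(teacher || student) with a separate softmax temperature for each network."""
    p = softmax(teacher_logits, T_teacher)   # softened teacher distribution
    q = softmax(student_logits, T_student)   # softened student distribution
    return float(np.sum(p * (np.log(p) - np.log(q))))

t = np.array([4.0, 1.0, 0.5])
s = np.array([3.0, 1.5, 0.2])
print(kd_loss(s, t, T_student=2.0, T_teacher=4.0))  # non-negative KL divergence
```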
The automated segmentation and tracking of macrophages during their migration are challenging tasks due to their dynamically changing shapes and motions. This paper proposes a new algorithm to achieve automatic cell tracking in time-lapse microscopy macrophage data. First, we design a segmentation method employing space-time filtering, local Otsu's thresholding, and the SUBSURF (subjective surface segmentation) method. Next, partial trajectories for cells overlapping in the temporal direction are extracted from the segmented images. Finally, the extracted trajectories are linked by considering their direction of movement. The segmented images and the trajectories obtained by the proposed method are compared with those of semi-automatic segmentation and manual tracking. The proposed tracking achieved 97.4% accuracy for macrophage data under challenging conditions: feeble fluorescent intensity, and irregular shapes and motions of macrophages. We expect that the automatically extracted trajectories of macrophages can provide evidence of how macrophages migrate depending on their polarization modes in situations such as wound healing.
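The Otsu thresholding step in the pipeline above can be sketched compactly. The paper applies Otsu's method locally per window; this sketch shows only the global core, which selects the cut point maximizing between-class variance of the intensity histogram.

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Global Otsu threshold: maximize between-class variance over all cut points."""
    hist, edges = np.histogram(image.ravel(), bins=n_bins)
    p = hist.astype(float) / hist.sum()            # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                              # class-0 weight at each cut
    w1 = 1.0 - w0
    cum_mu = np.cumsum(p * centers)
    mu_total = cum_mu[-1]
    mu0 = cum_mu / np.where(w0 > 0, w0, 1)         # class means (guard empty classes)
    mu1 = (mu_total - cum_mu) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2           # between-class variance
    return centers[np.argmax(between)]

# Synthetic bimodal intensities: dim background vs. bright cells.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])
t = otsu_threshold(img)
print(round(float(t), 2))  # lands between the two intensity modes
```

Running this same computation inside a sliding window (rather than globally) yields the local variant, which copes better with the feeble, uneven fluorescence described above.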
Data-centric AI has shed light on the significance of data within the machine learning (ML) pipeline. Acknowledging its importance, various research efforts and policies have been proposed by academia, industry, and government departments. Although the ability to utilize existing data is essential, the ability to build a dataset has become more important than ever. In light of this trend, we propose "Data Management Operations and Recipes" (DMOps) to guide the industry regardless of task or domain. In other words, this paper presents the concept of DMOps derived from real-world experience. By offering a baseline for building data, we aim to help the industry streamline its data operations optimally.
With the rapid development of drone technologies, drones are widely used in many applications, including military domains. In this paper, a novel situation-aware DRL-based autonomous nonlinear drone mobility control algorithm is proposed for cyber-physical loitering munition applications. On the battlefield, designing a DRL-based autonomous control algorithm is not straightforward because real-world data gathering is generally not available. Therefore, the approach taken in this paper is to construct a cyber-physical virtual environment using Unity. Based on the virtual cyber-physical battlefield scenarios, a DRL-based automated nonlinear drone mobility control algorithm can be designed, evaluated, and visualized. Moreover, many obstacles exist that are harmful to linear trajectory control in real-world battlefield scenarios. Thus, our proposed autonomous nonlinear drone mobility control algorithm utilizes situation-aware components implemented with a Raycast function in the Unity virtual scenarios. Based on the gathered situation-aware information, the drone can autonomously and nonlinearly adjust its trajectory during flight. This approach is therefore clearly beneficial for avoiding obstacles on obstacle-deployed battlefields. Our visualization-based performance evaluation shows that the proposed algorithm is superior to linear mobility control algorithms.
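The raycast-based situation awareness can be illustrated outside Unity with a toy 2D ray march: cast rays in several directions, measure the clearance to the nearest obstacle along each, and steer toward the freest direction. This is a hypothetical stand-in for Unity's Raycast and for the DRL policy, not the paper's controller.

```python
import math

def ray_clearance(pos, angle, obstacles, max_range=10.0, step=0.1):
    """March along a ray and return the distance to the first obstacle hit.
    obstacles: list of circles (cx, cy, r) in a 2D plane."""
    d = 0.0
    while d < max_range:
        x = pos[0] + d * math.cos(angle)
        y = pos[1] + d * math.sin(angle)
        for cx, cy, r in obstacles:
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                return d
        d += step
    return max_range

def pick_heading(pos, obstacles, n_rays=8):
    """Situation-aware choice: steer toward the ray direction with maximal clearance."""
    angles = [2 * math.pi * k / n_rays for k in range(n_rays)]
    return max(angles, key=lambda a: ray_clearance(pos, a, obstacles))

obstacles = [(3.0, 0.0, 1.0)]              # one obstacle directly ahead (+x)
heading = pick_heading((0.0, 0.0), obstacles)
print(heading != 0.0)  # the blocked +x direction is not chosen
```

In the paper's setting, the per-ray clearances would instead feed the DRL policy's observation vector rather than a hand-coded argmax.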
This paper proposes a new regularization algorithm referred to as macro-block dropout. Overfitting has been a difficult problem in training large neural network models. The dropout technique has proven to be simple yet very effective for regularization by preventing complex co-adaptations during training. In our work, we define a macro-block that contains a large number of units from the input to a Recurrent Neural Network (RNN). Rather than applying dropout to each unit, we apply random dropout to each macro-block. This algorithm has the effect of applying a different dropout rate to each layer even when the average dropout rate is kept constant, which yields better regularization. In our experiments using a Recurrent Neural Network-Transducer (RNN-T), this algorithm achieves relative Word Error Rate (WER) improvements of 4.30% and 6.13% over conventional dropout on LibriSpeech test-clean and test-other. With an Attention-based Encoder-Decoder (AED) model, it achieves relative WER improvements of 4.36% and 5.85% over conventional dropout on the same test sets.
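The block-wise masking can be sketched in a few lines: one keep/drop decision is made per contiguous block of units, and the surviving units are rescaled as in inverted dropout. This is a sketch of the idea on a flat activation vector, not the paper's exact RNN-T integration.

```python
import numpy as np

def macro_block_dropout(x, block_size, p, rng):
    """Drop whole contiguous blocks of units rather than individual units.
    x: (n_units,) activation vector; each block of `block_size` units is zeroed
    with probability p, with inverted-dropout rescaling of the kept units."""
    n_blocks = int(np.ceil(x.size / block_size))
    keep = (rng.random(n_blocks) >= p).astype(float)   # one decision per block
    mask = np.repeat(keep, block_size)[: x.size]       # expand to unit level
    return x * mask / (1.0 - p)                        # rescale kept units

rng = np.random.default_rng(0)
x = np.ones(16)
y = macro_block_dropout(x, block_size=4, p=0.5, rng=rng)
print(y.reshape(4, 4))  # each row (one block) is either all zeros or all 2.0
```

Because entire blocks drop together, the realized dropout rate fluctuates from layer to layer around the configured average, which is the effect the abstract credits for the improved regularization.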
Affect understanding capability is essential for social robots to autonomously interact with a group of users in an intuitive and reciprocal way. However, the challenge of multi-person affect understanding lies not only in the accurate perception of each user's affective state (e.g., engagement) but also in the recognition of the affect interplay between members (e.g., joint engagement), which manifests as complex but subtle nonverbal exchanges between them. Here we present a novel hybrid framework for identifying a parent-child dyad's joint engagement by combining a deep learning framework with various video augmentation techniques. Using a dataset of parent-child dyads reading storybooks together with a social robot at home, we first train RGB frame- and skeleton-based joint engagement recognition models on datasets augmented with four video augmentation techniques (General Aug, DeepFake, CutOut, and Mixed) to improve joint engagement classification performance. Second, we present experimental results on the use of the trained models in the robot-parent-child interaction context. Third, we introduce a behavior-based metric for evaluating the learned representations of the models to investigate model interpretability when recognizing joint engagement. This work serves as a first step toward fully unlocking the potential of end-to-end video understanding models pre-trained on large public datasets and augmented with data augmentation and visualization techniques for affect recognition in multi-person human-robot interaction in the wild.
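Of the four augmentations named above, CutOut is the simplest to make concrete. A minimal video-level sketch zeroes the same random square patch in every frame of a clip; applying one patch per clip (rather than per frame) is an assumption of this sketch, not a detail from the abstract.

```python
import numpy as np

def cutout(frames, size, rng):
    """CutOut for video: zero the same random square patch in every frame.
    frames: (T, H, W) array of grayscale frames."""
    T, H, W = frames.shape
    y = rng.integers(0, H - size + 1)   # top-left corner of the patch
    x = rng.integers(0, W - size + 1)
    out = frames.copy()
    out[:, y:y + size, x:x + size] = 0.0
    return out

rng = np.random.default_rng(0)
clip = np.ones((8, 32, 32))             # toy 8-frame clip
aug = cutout(clip, size=8, rng=rng)
print(int((aug == 0).sum()))            # 8 frames x 8x8 patch = 512 zeroed pixels
```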
Training agents via off-policy deep reinforcement learning (RL) requires a large memory, called the replay memory, that stores past experiences used for learning. These experiences are sampled, uniformly or non-uniformly, to create the batches used for training. When calculating the loss function, off-policy algorithms assume that all samples are of equal importance. In this paper, we hypothesize that training can be enhanced by assigning each experience a different importance, based on its temporal-difference (TD) error, directly in the training objective. We propose a novel method that introduces a weighting factor for each experience when calculating the loss function at the learning stage. In addition to improving convergence speed when used with uniform sampling, the method can be combined with prioritization methods for non-uniform sampling. Combining the proposed method with prioritization methods improves sampling efficiency while increasing the performance of TD-based off-policy RL algorithms. The effectiveness of the proposed method is demonstrated by experiments in six environments of the OpenAI Gym suite. The experimental results show that the proposed method achieves a 33%~76% reduction in convergence time in three environments, and an 11% increase in returns together with a 3%~10% increase in success rate in the other three environments.
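The per-experience weighting in the objective can be sketched as follows. The specific weighting function here (normalized |TD error| raised to an exponent) is a hypothetical stand-in for the paper's factor; it just shows how high-error transitions come to dominate the loss instead of the sampling distribution.

```python
import numpy as np

def weighted_td_loss(td_errors, alpha=0.5, eps=1e-3):
    """Weight each sample's squared TD error by its own magnitude, so
    large-error transitions contribute more to the training objective."""
    w = (np.abs(td_errors) + eps) ** alpha
    w = w / w.sum() * len(td_errors)       # normalize to mean weight 1
    return float(np.mean(w * td_errors ** 2))

td = np.array([0.1, -0.2, 2.0, 0.05])
print(weighted_td_loss(td) > float(np.mean(td ** 2)))  # upweighting the outlier raises the loss
```

Because the weights live in the loss rather than in the sampler, the scheme composes naturally with uniform sampling or with prioritized replay, as the abstract notes.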
Neural fields, also known as coordinate-based or implicit neural representations, have shown a remarkable capability for representing, generating, and manipulating various forms of signals. For video representations, however, mapping pixel-wise coordinates to RGB colors has shown relatively low compression performance and slow convergence and inference speed. Frame-wise video representation, which maps a temporal coordinate to an entire frame, has recently emerged as an alternative way to represent videos, improving compression rates and encoding speed. While promising, it has still failed to reach the performance of state-of-the-art video compression algorithms. In this work, we propose FFNeRV, a novel method for incorporating flow information into frame-wise representations, inspired by standard video codecs, to exploit the temporal redundancy across frames in videos. Furthermore, we introduce a fully convolutional architecture, enabled by one-dimensional temporal grids, that improves the continuity of spatial features. Experimental results show that FFNeRV yields the best performance for video compression and frame interpolation among methods using frame-wise representations or neural fields. To reduce the model size even further, we devise a more compact convolutional architecture using group and pointwise convolutions. With model compression techniques, including quantization-aware training and entropy coding, FFNeRV outperforms widely used standard video codecs (H.264 and HEVC) and performs on par with state-of-the-art video compression algorithms.
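The parameter savings from replacing a standard convolution with a group convolution followed by a pointwise (1x1) convolution can be verified with simple arithmetic; the layer sizes below are illustrative, not FFNeRV's actual configuration.

```python
# Parameter counts: standard conv vs. group conv + pointwise (1x1) conv,
# the kind of factorization used to shrink convolutional layers.

def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a 2D conv with a k x k kernel and `groups` groups (no bias)."""
    return (c_in // groups) * c_out * k * k

c_in, c_out, k, g = 64, 64, 3, 8
standard = conv_params(c_in, c_out, k)                                  # 64*64*9
factored = conv_params(c_in, c_out, k, groups=g) + conv_params(c_out, c_out, 1)
print(standard, factored)  # 36864 vs 4608 + 4096 = 8704
```

Here the factorized pair uses roughly a quarter of the parameters of the standard layer, which is why it pairs well with the quantization and entropy-coding steps mentioned above.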